Sampling diverse programs from a code language model and reranking with model likelihood is a popular method for code generation, but it is prone to preferring degenerate solutions. Inspired by collaborative programming, we propose Coder-Reviewer reranking. We augment Coder language models from past work, which generate programs given language instructions, with Reviewer models, which evaluate the likelihood of the instruction given the generated programs. We perform an extensive study across six datasets with eight models from three model families. Experimental results show that Coder-Reviewer reranking leads to consistent and significant improvement (up to 17% absolute accuracy gain) over reranking with the Coder model only. When combined with executability filtering, Coder-Reviewer reranking can often outperform the minimum Bayes risk method. Coder-Reviewer reranking is easy to implement by prompting, can generalize to different programming languages, and works well with off-the-shelf hyperparameters.
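The reranking criterion is simple to sketch. Below is a minimal illustration, assuming a hypothetical `lm.logprob(prompt, continuation)` helper that returns the summed token log-probabilities of a continuation under a left-to-right language model; the paper obtains both terms by prompting, but the helper here is an assumption, not the paper's API.

```python
def coder_reviewer_score(lm, instruction, program):
    """Rerank score: log p(program | instruction) + log p(instruction | program).

    `lm.logprob(prompt, continuation)` is a hypothetical helper returning the
    sum of token log-probabilities of `continuation` given `prompt`.
    """
    coder = lm.logprob(prompt=instruction, continuation=program)     # Coder: p(y | x)
    reviewer = lm.logprob(prompt=program, continuation=instruction)  # Reviewer: p(x | y)
    return coder + reviewer


def rerank(lm, instruction, candidates):
    # Optionally drop candidates that fail to execute before scoring
    # (executability filtering), then keep the highest-scoring program.
    return max(candidates, key=lambda prog: coder_reviewer_score(lm, instruction, prog))
```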
Datasets scraped from the internet have been critical to the successes of large-scale machine learning. Yet, this very success puts the utility of future internet-derived datasets at potential risk, as model outputs begin to replace human annotations as a source of supervision. In this work, we first formalize a system where interactions with one model are recorded as history and used as training data. We then analyze its stability over time by tracking changes to a test-time bias statistic (e.g., the gender bias of model predictions). We find that the degree of bias amplification is closely linked to whether the model's outputs behave like samples from the training distribution, a behavior which we characterize and define as consistent calibration. Experiments in three conditional prediction scenarios (image classification, visual role-labeling, and language generation) demonstrate that models exhibiting such sample-like behavior are more calibrated and thus more stable. Based on this insight, we propose an intervention to help calibrate and stabilize unstable feedback systems. Code is available at https://github.com/rtaori/data_feedback.
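As a toy illustration of the stability claim (not the paper's experimental setup), the feedback loop can be simulated with a stand-in "model" whose bias statistic is the rate of positive predictions; calibrated, sample-like outputs keep the statistic roughly stable, whereas thresholded outputs would amplify it:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_positive_rate(labels):
    # Stand-in "model": predicts at the positive rate it saw in training.
    return labels.mean()

# Round 0: human-annotated data with a 60/40 label skew.
labels = rng.binomial(1, 0.6, size=1000).astype(float)
for t in range(10):
    p = fit_positive_rate(labels)
    # Model outputs replace human annotation as the source of supervision.
    # Sampling predictions at rate p keeps the statistic stable ("sample-like"
    # behavior); thresholding (p > 0.5 -> all 1s) would amplify the bias.
    model_outputs = rng.binomial(1, p, size=1000).astype(float)
    labels = np.concatenate([labels, model_outputs])
    print(f"round {t}: positive rate = {labels.mean():.3f}")
```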
Overparameterized neural networks can be highly accurate on average on an i.i.d. test set yet consistently fail on atypical groups of the data (e.g., by learning spurious correlations that hold on average but not in such groups). Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups. However, we find that naively applying group DRO to overparameterized neural networks fails: these models can perfectly fit the training data, and any model with vanishing average training loss also already has vanishing worst-case training loss. Instead, the poor worst-case performance arises from poor generalization on some groups. By coupling group DRO models with increased regularization (a stronger-than-typical $\ell_2$ penalty or early stopping), we achieve substantially higher worst-group accuracies, with 10-40 percentage point improvements on a natural language inference task and two image tasks, while maintaining high average accuracies. Our results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization. Finally, we introduce a stochastic optimization algorithm, with convergence guarantees, to efficiently train group DRO models.
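A minimal sketch of the resulting objective, assuming PyTorch and a classification setup: take the worst average loss over the groups present in a batch and add a strong $\ell_2$ penalty. (The paper's actual algorithm maintains exponentiated-gradient weights over groups; the hard max here is the limiting case.)

```python
import torch

def group_dro_loss(model, batch_x, batch_y, group_ids, n_groups, l2=1.0):
    """Worst-case training loss over groups, plus a strong l2 penalty."""
    losses = torch.nn.functional.cross_entropy(
        model(batch_x), batch_y, reduction="none")
    group_losses = []
    for g in range(n_groups):
        mask = group_ids == g
        if mask.any():
            # Average loss within each group present in the batch.
            group_losses.append(losses[mask].mean())
    worst = torch.stack(group_losses).max()
    # Stronger-than-typical l2 regularization, per the abstract's finding.
    reg = l2 * sum((p ** 2).sum() for p in model.parameters())
    return worst + reg
```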
Machine learning models (e.g., speech recognizers) are usually trained to minimize average loss, which results in representation disparity: minority groups (e.g., non-native speakers) contribute less to the training objective and thus tend to suffer higher loss. Worse, as model accuracy affects user retention, a minority group can shrink over time. In this paper, we first show that the status quo of empirical risk minimization (ERM) amplifies representation disparity over time, which can even make initially fair models unfair. To mitigate this, we develop an approach based on distributionally robust optimization (DRO), which minimizes the worst case risk over all distributions close to the empirical distribution. We prove that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice, while remaining oblivious to the identity of the groups. We demonstrate that DRO prevents disparity amplification on examples where ERM fails, and show improvements in minority group user satisfaction in a real-world text autocomplete task.
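One standard way to implement such a group-oblivious objective is through the dual of DRO over a $\chi^2$-divergence ball around the empirical distribution; a hedged PyTorch sketch, with the radius-dependent constant left as a hyperparameter:

```python
import torch

def chi2_dro_loss(per_example_losses, eta, C=2.0):
    """Dual form of DRO over a chi-squared ball (cf. this line of work):

        R(theta) ~= min_eta  C * sqrt(E[(loss - eta)_+^2]) + eta

    `eta` is a scalar minimized jointly (or by 1-d search); `C` grows with
    the robustness radius. Examples with loss below eta are ignored, so the
    objective focuses on the worst-off (e.g., minority-group) examples
    without needing group labels.
    """
    excess = torch.relu(per_example_losses - eta)
    return C * torch.sqrt((excess ** 2).mean() + 1e-12) + eta
```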
Recent work has identified noisy and misannotated data as a core cause of hallucinations and unfaithful outputs in Natural Language Generation (NLG) tasks. Consequently, identifying and removing these examples is a key open challenge in creating reliable NLG systems. In this work, we introduce a framework to identify and remove low-quality training instances that lead to undesirable outputs, such as faithfulness errors in text summarization. We show that existing approaches for error tracing, such as gradient-based influence measures, do not perform reliably for detecting faithfulness errors in summarization. We overcome the drawbacks of existing error tracing methods through a new, contrast-based estimate that compares undesired generations to human-corrected outputs. Our proposed method achieves a mean average precision of 0.91 across synthetic tasks with known ground truth and a two-fold reduction in hallucinations in a real-world entity hallucination evaluation on the NYT dataset.
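A rough way to picture a contrast-based estimate (an illustrative approximation, not the paper's exact estimator): score each training example by whether a gradient step on it moves the model toward the undesired generation and away from the human-corrected one.

```python
import torch

def contrast_score(model, train_example, bad_output, fixed_output, loss_fn):
    """Sketch of contrast-based tracing: the inner product between a training
    example's gradient and the gradient of the contrast
    loss(bad) - loss(fixed). A large positive score flags the example as one
    whose training pushes the model toward the undesired generation.
    """
    g_train = torch.autograd.grad(
        loss_fn(model, train_example), model.parameters())
    contrast = loss_fn(model, bad_output) - loss_fn(model, fixed_output)
    g_contrast = torch.autograd.grad(contrast, model.parameters())
    return sum((a * b).sum() for a, b in zip(g_train, g_contrast)).item()
```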
Task-oriented dialogue systems often assist users with personal or confidential matters. For this reason, the developers of such a system are generally prohibited from observing actual usage. So how can they know where the system is failing and needs more training data or new functionality? In this work, we study ways in which realistic user utterances can be generated synthetically, to help increase the linguistic and functional coverage of the system, without compromising the privacy of actual users. To this end, we propose a two-stage Differentially Private (DP) generation method which first generates latent semantic parses, and then generates utterances based on the parses. Our proposed approach improves MAUVE by 3.8$\times$ and parse tree node-type overlap by 1.4$\times$ relative to current approaches for private synthetic data generation, improving both on fluency and semantic coverage. We further validate our approach on a realistic domain adaptation task of adding new functionality from private user data to a semantic parser, and show a 1.3$\times$ gain in accuracy on the new feature.
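The two-stage pipeline can be sketched at a high level; every name below (`dp_finetune`, `sample`, the dialogue fields) is a hypothetical stand-in, not the paper's API. The key property is that sampling is differentially private post-processing, so the privacy cost composes over the two training stages only.

```python
def generate_private_synthetic_data(base_lm, private_dialogues, n_synthetic,
                                    dp_finetune, sample,
                                    epsilon_parse=3.0, epsilon_utt=3.0):
    """Two-stage DP generation sketch. `dp_finetune` and `sample` are
    hypothetical stand-ins (e.g., DP-SGD fine-tuning and nucleus sampling).
    """
    # Stage 1: privately fit a generator of latent semantic parses.
    parse_model = dp_finetune(base_lm,
                              [d["parse"] for d in private_dialogues],
                              epsilon=epsilon_parse)
    # Stage 2: privately fit a parse-conditioned utterance generator.
    utt_model = dp_finetune(base_lm,
                            [(d["parse"], d["utterance"]) for d in private_dialogues],
                            epsilon=epsilon_utt)
    # Sampling touches no private data (DP post-processing), so the total
    # cost composes to roughly epsilon_parse + epsilon_utt.
    parses = [sample(parse_model) for _ in range(n_synthetic)]
    return [(p, sample(utt_model, condition=p)) for p in parses]
```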
Machine learning models are now able to convert user-written text descriptions into naturalistic images. These models are available to anyone online and are being used to generate millions of images a day. We investigate these models and find that they amplify dangerous and complex stereotypes. Moreover, we find that the amplified stereotypes are difficult to predict and not easily mitigated by users or model owners. The extent to which these image-generation models perpetuate and amplify stereotypes, and their mass deployment, is cause for serious concern.
Despite the empirical successes of self-supervised learning (SSL) methods, it is unclear which characteristics of their representations lead to high downstream accuracies. In this work, we characterize properties that SSL representations should ideally satisfy. Specifically, we prove necessary and sufficient conditions such that, for any task invariant to given data augmentations, desired probes (e.g., linear or MLP) trained on that representation attain perfect accuracy. These requirements lead to a unifying conceptual framework for improving existing SSL methods and deriving new ones. For contrastive learning, our framework prescribes simple but significant improvements to previous methods, such as using asymmetric projection heads. For non-contrastive learning, we use our framework to derive a simple and novel objective. Our resulting SSL algorithms outperform baselines on standard benchmarks, including SwAV+multi-crops on linear probing of ImageNet.
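The contrastive prescription mentioned above (asymmetric projection heads) is easy to state in code. Below is a hedged InfoNCE-style sketch, an illustration of the idea rather than the paper's exact objective, in which the two augmented views are projected by different heads before similarities are computed:

```python
import torch
import torch.nn.functional as F

def asymmetric_contrastive_loss(encoder, head_a, head_b, view1, view2, tau=0.1):
    """InfoNCE-style loss with asymmetric projection heads: the two views of
    each image are projected by *different* heads (head_a vs. head_b) before
    the similarity is computed, instead of sharing one head as in SimCLR.
    """
    za = F.normalize(head_a(encoder(view1)), dim=-1)  # (N, d)
    zb = F.normalize(head_b(encoder(view2)), dim=-1)  # (N, d)
    logits = za @ zb.t() / tau                        # pairwise similarities
    targets = torch.arange(za.size(0), device=za.device)
    # Matched pairs (the diagonal) are positives; all others are negatives.
    return F.cross_entropy(logits, targets)
```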
The development of CLIP [Radford et al., 2021] has sparked a debate on whether language supervision can result in vision models with more transferable representations than traditional image-only methods. Our work studies this question through a carefully controlled comparison of the two approaches, in terms of their ability to learn representations that transfer to downstream classification tasks. We find that when the pre-training dataset meets certain criteria (it is sufficiently large and contains descriptive captions with low variability), image-only methods do not match CLIP's transfer performance, even when they are trained with more image data. However, contrary to what one might expect, there are practical settings in which these criteria are not met, where added supervision through captions is actually detrimental. Motivated by our findings, we devise simple prescriptions to enable CLIP to better leverage the language information present in existing pre-training datasets.
Large pretrained models can be privately fine-tuned to attain performance approaching that of non-private models. A common theme in these results is the surprising observation that high-dimensional models can achieve favorable privacy-utility trade-offs. This seemingly contradicts the known dimension dependence of differentially private convex learning, and raises the following research question: when does the performance of differentially private learning not degrade as model size grows? We identify that the magnitude of gradients projected onto a subspace is a key factor that determines performance. To precisely characterize this for private convex learning, we introduce a condition which we term restricted Lipschitz continuity, and derive bounds on the excess empirical and population risks that are dimension-independent under additional conditions. We show empirically that, in private fine-tuning of large language models, gradients evaluated near a local optimum are mostly controlled by a few principal components. This behavior resembles the conditions under which we obtain dimension-independent bounds in the convex setting. Together, our theoretical and empirical results provide a possible explanation for the success of large-scale private fine-tuning.
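The empirical observation (gradients near a local optimum concentrating in a few principal components) is straightforward to check; a minimal sketch, assuming per-step gradients have already been collected and flattened into a matrix:

```python
import numpy as np

def gradient_subspace_mass(grads, k=10):
    """Fraction of gradient 'energy' captured by the top-k principal
    components of a set of flattened gradient vectors.

    `grads` is an (n, d) array of per-step gradients collected near a local
    optimum. A value close to 1 for small k indicates the low-dimensional
    structure that the dimension-independent bounds rely on.
    """
    G = grads - grads.mean(axis=0, keepdims=True)
    # Singular values of G give the principal-component energies.
    s = np.linalg.svd(G, compute_uv=False)
    return (s[:k] ** 2).sum() / (s ** 2).sum()
```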